Results 1 - 17 of 17
1.
Nat Commun ; 15(1): 3407, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649694

ABSTRACT

The perception and neural processing of sensory information are strongly influenced by prior expectations. The integration of prior and sensory information can manifest through distinct underlying mechanisms: focusing on unexpected input, denoted as prediction error (PE) processing, or amplifying anticipated information via sharpened representation. In this study, we employed computational modeling using deep neural networks combined with representational similarity analyses of fMRI data to investigate these two processes during face perception. Participants were cued to see face images, some generated by morphing two faces, leading to ambiguity in face identity. We show that expected faces were identified faster and perception of ambiguous faces was shifted towards priors. Multivariate analyses uncovered evidence for PE processing across and beyond the face-processing hierarchy from the occipital face area (OFA), via the fusiform face area, to the anterior temporal lobe, and suggest sharpened representations in the OFA. Our findings support the proposition that the brain represents faces grounded in prior expectations.
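The representational similarity analyses mentioned above follow a common two-step logic: build a representational dissimilarity matrix (RDM) for each data source, then correlate the RDMs. The sketch below is a toy illustration of that logic on random data, not the study's actual pipeline.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of condition patterns (rows)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(rdm_a, rdm_b):
    """Second-order similarity: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

# Toy data: 6 face conditions x 100 voxels, plus a noisy "model" of them
rng = np.random.default_rng(0)
brain = rng.normal(size=(6, 100))
model = brain + 0.1 * rng.normal(size=(6, 100))
print(rsa_score(rdm(brain), rdm(model)))
```

In practice the model RDM would come from deep-network activations or hypothesis matrices rather than noisy copies of the data, and the comparison is often done with rank correlations.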

2.
Sci Rep ; 14(1): 8739, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627572

ABSTRACT

Inspired by recent findings in the visual domain, we investigated whether the stimulus-evoked pupil dilation reflects temporal statistical regularities in sequences of auditory stimuli. We conducted two preregistered pupillometry experiments (experiment 1, n = 30, 21 females; experiment 2, n = 31, 22 females). In both experiments, human participants listened to sequences of spoken vowels in two conditions. In the first condition, the stimuli were presented in a random order and, in the second condition, the same stimuli were presented in a sequence structured in pairs. The second experiment replicated the first experiment with a modified timing and number of stimuli presented and without participants being informed about any sequence structure. The sound-evoked pupil dilation during a subsequent familiarity task indicated that participants learned the auditory vowel pairs of the structured condition. However, pupil diameter during the structured sequence did not differ according to the statistical regularity of the pair structure. This contrasts with similar visual studies, emphasizing the susceptibility of pupil effects during statistically structured sequences to experimental design settings in the auditory domain. In sum, our findings suggest that pupil diameter may serve as an indicator of sound pair familiarity but does not invariably respond to task-irrelevant transition probabilities of auditory sequences.


Subjects
Pupil; Sound; Female; Humans; Pupil/physiology; Recognition, Psychology; Auditory Perception/physiology
3.
Eur J Neurosci ; 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38469976

ABSTRACT

In everyday perception, we combine incoming sensory information with prior expectations. Expectations can be induced by cues that indicate the probability of following sensory events. The information provided by cues may differ and hence lead to different levels of uncertainty about which event will follow. In this experiment, we employed pupillometry to investigate whether the pupil dilation response to visual cues varies depending on the level of cue-associated uncertainty about a following auditory outcome. Also, we tested whether the pupil dilation response reflects the amount of surprise about the subsequently presented auditory stimulus. In each trial, participants were presented with a visual cue (face image) which was followed by an auditory outcome (spoken vowel). After the face cue, participants had to indicate by keypress which of three auditory vowels they expected to hear next. We manipulated the cue-associated uncertainty by varying the probabilistic cue-outcome contingencies: One face was most likely followed by one specific vowel (low cue uncertainty), another face was equally likely followed by either of two vowels (intermediate cue uncertainty) and the third face was followed by all three vowels (high cue uncertainty). Our results suggest that pupil dilation in response to task-relevant cues depends on the associated uncertainty, but only for large differences in the cue-associated uncertainty. Additionally, in response to the auditory outcomes, the pupil dilation scaled negatively with the cue-dependent probabilities, likely signalling the amount of surprise.

4.
Commun Biol ; 6(1): 135, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36725984

ABSTRACT

Perception is an active inference in which prior expectations are combined with sensory input. It is still unclear how the strength of prior expectations is represented in the human brain. The strength, or precision, of a prior could be represented with its content, potentially in higher-level sensory areas. We used multivariate analyses of functional magnetic resonance imaging data to test whether expectation strength is represented together with the expected face in high-level face-sensitive regions. Participants were trained to associate images of scenes with subsequently presented images of different faces. Each scene predicted three faces, each with either low, intermediate, or high probability. We found that anticipation enhances the similarity of response patterns in the face-sensitive anterior temporal lobe to response patterns specifically associated with the image of the expected face. In contrast, during face presentation, activity increased for unexpected faces in a typical prediction error network, containing areas such as the caudate and the insula. Our findings show that strength-dependent face expectations are represented in higher-level face-identity areas, supporting hierarchical theories of predictive processing according to which higher-level sensory regions represent weighted priors.


Subjects
Brain Mapping; Motivation; Humans; Brain Mapping/methods; Magnetic Resonance Imaging/methods; Multivariate Analysis; Functional Neuroimaging
5.
J Acoust Soc Am ; 152(2): 820, 2022 08.
Article in English | MEDLINE | ID: mdl-36050169

ABSTRACT

Different speakers produce the same intended vowel with very different physical properties. Fundamental frequency (F0) and formant frequencies (FF), the two main parameters that discriminate between voices, also influence vowel perception. While it has been shown that listeners comprehend speech more accurately if they are familiar with a talker's voice, it is still unclear how such prior information is used when decoding the speech stream. In three online experiments, we examined the influence of speaker context via F0 and FF shifts on the perception of /o/-/u/ vowel contrasts. Participants perceived vowels from an /o/-/u/ continuum shifted toward /u/ when F0 was lowered or FF increased relative to the original speaker's voice and vice versa. This shift was reduced when the speakers were presented in a block-wise context compared to random order. Conversely, the original base voice was perceived to be shifted toward /u/ when presented in the context of a low F0 or high FF speaker, compared to a shift toward /o/ with high F0 or low FF speaker context. These findings demonstrate that F0 and FF jointly influence vowel perception in speaker context.


Subjects
Speech Perception; Voice; Humans; Phonetics; Speech; Speech Acoustics
6.
Commun Biol ; 5(1): 896, 2022 09 01.
Article in English | MEDLINE | ID: mdl-36050393

ABSTRACT

Similarity-based categorization can be performed by memorizing category members as exemplars or by abstracting the central tendency of the category - the prototype. In similarity-based categorization of stimuli with clearly identifiable dimensions from two categories, prototype representations were previously located in the hippocampus and the ventromedial prefrontal cortex (vmPFC) and exemplar representations in areas supporting visual memory. However, the neural implementation of exemplar and prototype representations in perceptual similarity-based categorization of single categories is unclear. To investigate these representations, we applied model-based univariate and multivariate analyses of functional imaging data from a dot-pattern paradigm-based task. Univariate prototype and exemplar representations occurred bilaterally in visual areas. Multivariate analyses additionally identified prototype representations in parietal areas and exemplar representations in the hippocampus. Bayesian analyses supported the non-presence of prototype representations in the hippocampus and the vmPFC. We additionally demonstrate that some individuals form both representation types simultaneously, probably granting flexibility in categorization strategies.
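The prototype and exemplar accounts contrasted above can be expressed as simple similarity rules: the prototype model compares a probe to the category mean, while the exemplar model sums similarity over every stored member (in the style of the generalized context model). The sketch below is a toy illustration, not the paper's fitted model.

```python
import numpy as np

def prototype_similarity(item, exemplars, c=1.0):
    """Similarity to the category prototype (the mean of the exemplars)."""
    prototype = exemplars.mean(axis=0)
    return np.exp(-c * np.linalg.norm(item - prototype))

def exemplar_similarity(item, exemplars, c=1.0):
    """Summed exponential similarity to every stored exemplar."""
    d = np.linalg.norm(exemplars - item, axis=1)
    return np.exp(-c * d).sum()

# Toy dot-pattern category: 5 training exemplars in 2-D
exemplars = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                      [1.0, 1.0], [0.5, 0.5]])
probe = np.array([0.5, 0.5])   # this probe sits exactly on the prototype
print(prototype_similarity(probe, exemplars),
      exemplar_similarity(probe, exemplars))
```

Model-based analyses like those in the study derive per-trial predictions from each rule and regress them against brain responses; the sensitivity parameter `c` here is an arbitrary illustrative choice.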


Subjects
Memory; Prefrontal Cortex; Bayes Theorem; Hippocampus; Humans; Prefrontal Cortex/diagnostic imaging
7.
J Neurosci ; 42(31): 6108-6120, 2022 08 03.
Article in English | MEDLINE | ID: mdl-35760528

ABSTRACT

Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explored phase-locking to auditory and visual signals in MEG recordings from 14 human participants (6 females, 8 males) who reported words from single spoken sentences. We manipulated the acoustic clarity and visual speech signals such that critical speech information is present in auditory, visual, or both modalities. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2-6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase-locking to the auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase-locking to auditory signals in visual cortex was enhanced for AV speech compared with audio-only speech that was matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus did not show above-chance partial coherence with visual speech signals during AV conditions but did show partial coherence in visual-only conditions. Hence, visual speech enabled stronger phase-locking to auditory signals in visual areas, whereas phase-locking of visual speech in auditory regions only occurred during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception. SIGNIFICANCE STATEMENT: Verbal communication in noisy environments is challenging, especially for hearing-impaired individuals.
Seeing facial movements of communication partners improves speech perception when auditory signals are degraded or absent. The neural mechanisms supporting lip-reading or audio-visual benefit are not fully understood. Using MEG recordings and partial coherence analysis, we show that speech information is used differently in brain regions that respond to auditory and visual speech. While visual areas use visual speech to improve phase-locking to auditory speech signals, auditory areas do not show phase-locking to visual speech unless auditory speech is absent and visual speech is used to substitute for missing auditory signals. These findings highlight brain processes that combine visual and auditory signals to support speech understanding.
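Partial coherence, as used in this study, quantifies the phase-locking between two signals that remains after the linear contribution of a third signal is removed from their cross-spectrum. A toy numpy sketch of the computation (illustrative only, not the MEG pipeline):

```python
import numpy as np

def partial_coherence(x, y, z):
    """Coherence between x and y (arrays of shape trials x samples) after
    removing the linear contribution of a third signal z, computed per
    frequency bin from trial-averaged cross-spectra."""
    X, Y, Z = (np.fft.rfft(s, axis=1) for s in (x, y, z))
    S = lambda a, b: (a * np.conj(b)).mean(axis=0)   # trial-averaged cross-spectrum
    szz = S(Z, Z).real
    sxy = S(X, Y) - S(X, Z) * S(Z, Y) / szz          # residual cross-spectrum
    sxx = S(X, X).real - np.abs(S(X, Z)) ** 2 / szz
    syy = S(Y, Y).real - np.abs(S(Y, Z)) ** 2 / szz
    return np.abs(sxy) ** 2 / (sxx * syy)

# Toy check: x and y share a 10-cycle rhythm only through z, plus a
# direct 20-cycle rhythm that z does not carry.
rng = np.random.default_rng(1)
trials, n = 200, 128
t = np.arange(n)
phi = rng.uniform(0, 2 * np.pi, size=(trials, 1))
psi = rng.uniform(0, 2 * np.pi, size=(trials, 1))
z = np.cos(2 * np.pi * 10 * t / n + phi) + 0.1 * rng.normal(size=(trials, n))
d = np.cos(2 * np.pi * 20 * t / n + psi)
x = z + d + 0.1 * rng.normal(size=(trials, n))
y = z + d + 0.1 * rng.normal(size=(trials, n))
pc = partial_coherence(x, y, z)
print(pc[10], pc[20])   # bin shared only via z vs. directly shared bin
```

The coupling carried entirely by `z` is suppressed at bin 10, while the direct coupling at bin 20 survives, which is the logic behind separating correlated audio and visual contributions to the neural signal.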


Subjects
Auditory Cortex; Speech Perception; Visual Cortex; Acoustic Stimulation; Auditory Cortex/physiology; Auditory Perception; Female; Humans; Lipreading; Male; Speech/physiology; Speech Perception/physiology; Visual Cortex/physiology; Visual Perception/physiology
8.
Neuroimage ; 231: 117824, 2021 05 01.
Article in English | MEDLINE | ID: mdl-33549756

ABSTRACT

The expectation-suppression effect - reduced stimulus-evoked responses to expected stimuli - is widely considered to be an empirical hallmark of reduced prediction errors in the framework of predictive coding. Here we challenge this notion by proposing that expectation suppression could be explained by a reduced attention effect. Specifically, we argue that reduced responses to predictable stimuli can also be explained by a reduced saliency-driven allocation of attention. We base our discussion mainly on findings in the visual cortex and propose that resolving this controversy requires the assessment of qualitative differences between the ways in which attention and surprise enhance brain responses.


Subjects
Attention/physiology; Brain/physiology; Motivation/physiology; Neuroimaging/methods; Photic Stimulation/methods; Brain/diagnostic imaging; Forecasting; Humans
9.
J Neurosci ; 39(9): 1720-1732, 2019 02 27.
Article in English | MEDLINE | ID: mdl-30643025

ABSTRACT

Developmental dyslexia is characterized by the inability to acquire typical reading and writing skills. Dyslexia has been frequently linked to cerebral cortex alterations; however, recent evidence also points toward sensory thalamus dysfunctions: dyslexics showed reduced responses in the left auditory thalamus (medial geniculate body, MGB) during speech processing in contrast to neurotypical readers. In addition, in the visual modality, dyslexics have reduced structural connectivity between the left visual thalamus (lateral geniculate nucleus, LGN) and V5/MT, a cerebral cortex region involved in visual movement processing. Higher LGN-V5/MT connectivity in dyslexics was associated with faster rapid naming of letters and numbers (RANln), a measure that is highly correlated with reading proficiency. Here, we tested two hypotheses that were directly derived from these previous findings. First, we tested the hypothesis that dyslexics have reduced structural connectivity between the left MGB and the auditory-motion-sensitive part of the left planum temporale (mPT). Second, we hypothesized that the amount of left mPT-MGB connectivity correlates with dyslexics' RANln scores. Using diffusion tensor imaging-based probabilistic tracking, we show that male adults with developmental dyslexia have reduced structural connectivity between the left MGB and the left mPT, confirming the first hypothesis. Stronger left mPT-MGB connectivity was not associated with faster RANln scores in dyslexics, but was in neurotypical readers. Our findings provide the first evidence that reduced cortico-thalamic connectivity in the auditory modality is a feature of developmental dyslexia, and it may also affect reading-related cognitive abilities in neurotypical readers. SIGNIFICANCE STATEMENT: Developmental dyslexia is one of the most widespread learning disabilities.
Although previous neuroimaging research mainly focused on pathomechanisms of dyslexia at the cerebral cortex level, several lines of evidence suggest an atypical functioning of subcortical sensory structures. By means of diffusion tensor imaging, we here show that dyslexic male adults have reduced white matter connectivity in a cortico-thalamic auditory pathway between the left auditory motion-sensitive planum temporale and the left medial geniculate body. Connectivity strength of this pathway was associated with measures of reading fluency in neurotypical readers. This is novel evidence on the neurocognitive correlates of reading proficiency, highlighting the importance of cortico-subcortical interactions between regions involved in the processing of spectrotemporally complex sound.


Subjects
Connectome; Dyslexia/physiopathology; Geniculate Bodies/physiopathology; Adult; Auditory Cortex/diagnostic imaging; Auditory Cortex/physiopathology; Dyslexia/diagnostic imaging; Geniculate Bodies/diagnostic imaging; Humans; Magnetic Resonance Imaging; Male
10.
J Neurosci ; 38(27): 6076-6089, 2018 07 04.
Article in English | MEDLINE | ID: mdl-29891730

ABSTRACT

Humans use prior expectations to improve perception, especially of sensory signals that are degraded or ambiguous. However, if sensory input deviates from prior expectations, then correct perception depends on adjusting or rejecting prior expectations. Failure to adjust or reject the prior leads to perceptual illusions, especially if there is partial overlap (and thus partial mismatch) between expectations and input. With speech, "slips of the ear" occur when expectations lead to misperception. For instance, an entomologist might be more susceptible to hear "The ants are my friends" for "The answer, my friend" (in the Bob Dylan song Blowin' in the Wind). Here, we contrast two mechanisms by which prior expectations may lead to misperception of degraded speech. First, clear representations of the common sounds in the prior and input (i.e., expected sounds) may lead to incorrect confirmation of the prior. Second, insufficient representations of sounds that deviate between prior and input (i.e., prediction errors) could lead to deception. We used crossmodal predictions from written words that partially match degraded speech to compare neural responses when male and female human listeners were deceived into accepting the prior or correctly rejected it. Combined behavioral and multivariate representational similarity analysis of fMRI data shows that veridical perception of degraded speech is signaled by representations of prediction error in the left superior temporal sulcus. Instead of using top-down processes to support perception of expected sensory input, our findings suggest that the strength of neural prediction error representations distinguishes correct perception and misperception. SIGNIFICANCE STATEMENT: Misperceiving spoken words is an everyday experience, with outcomes that range from shared amusement to serious miscommunication.
For hearing-impaired individuals, frequent misperception can lead to social withdrawal and isolation, with severe consequences for wellbeing. In this work, we specify the neural mechanisms by which prior expectations, which are so often helpful for perception, can lead to misperception of degraded sensory signals. Most descriptive theories of illusory perception explain misperception as arising from a clear sensory representation of features or sounds that are in common between prior expectations and sensory input. Our work instead provides support for a complementary proposal: that misperception occurs when there is an insufficient sensory representation of the deviation between expectations and sensory signals.


Subjects
Brain/physiology; Illusions/physiology; Motivation/physiology; Speech Perception/physiology; Adolescent; Adult; Brain Mapping/methods; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
11.
Neuroimage ; 178: 721-734, 2018 09.
Article in English | MEDLINE | ID: mdl-29772380

ABSTRACT

The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual-speech recognition is used to enhance or even enable understanding of what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: together with similar findings in the auditory modality, they imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition.


Subjects
Brain Mapping/methods; Geniculate Bodies/physiology; Recognition, Psychology/physiology; Speech Perception/physiology; Visual Perception/physiology; Adult; Female; Humans; Magnetic Resonance Imaging/methods; Male; Young Adult
12.
PLoS Biol ; 14(11): e1002577, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27846209

ABSTRACT

Successful perception depends on combining sensory input with prior knowledge. However, the underlying mechanism by which these two sources of information are combined is unknown. In speech perception, as in other domains, two functionally distinct coding schemes have been proposed for how expectations influence representation of sensory evidence. Traditional models suggest that expected features of the speech input are enhanced or sharpened via interactive activation (Sharpened Signals). Conversely, Predictive Coding suggests that expected features are suppressed so that unexpected features of the speech input (Prediction Errors) are processed further. The present work is aimed at distinguishing between these two accounts of how prior knowledge influences speech perception. By combining behavioural, univariate, and multivariate fMRI measures of how sensory detail and prior expectations influence speech perception with computational modelling, we provide evidence in favour of Prediction Error computations. Increased sensory detail and informative expectations have additive behavioural and univariate neural effects because they both improve the accuracy of word report and reduce the BOLD signal in lateral temporal lobe regions. However, sensory detail and informative expectations have interacting effects on speech representations shown by multivariate fMRI in the posterior superior temporal sulcus. When prior knowledge was absent, increased sensory detail enhanced the amount of speech information measured in superior temporal multivoxel patterns, but with informative expectations, increased sensory detail reduced the amount of measured information. Computational simulations of Sharpened Signals and Prediction Errors during speech perception could both explain these behavioural and univariate fMRI observations. However, the multivariate fMRI observations were uniquely simulated by a Prediction Error and not a Sharpened Signal model. 
The interaction between prior expectation and sensory detail provides evidence for a Predictive Coding account of speech perception. Our work establishes methods that can be used to distinguish representations of Prediction Error and Sharpened Signals in other perceptual domains.
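The two coding schemes contrasted in this study can be caricatured in a few lines: sharpening multiplies sensory evidence by the prior and renormalizes, while prediction error subtracts the prediction from the evidence. The values below are an illustrative toy, not the paper's simulations.

```python
import numpy as np

def sharpened(signal, prior):
    """Sharpening: expected features get a multiplicative gain,
    and the representation is renormalized."""
    s = signal * prior
    return s / s.sum()

def prediction_error(signal, prior):
    """Predictive coding: expected features are explained away,
    so only the deviation from the prediction is passed on."""
    return signal - prior

# Toy "speech evidence" over three candidate words
signal = np.array([0.7, 0.2, 0.1])
matching_prior = np.array([0.7, 0.2, 0.1])   # informative, correct expectation
uniform_prior = np.full(3, 1.0 / 3.0)        # no expectation

# Sharpening boosts the expected word; the prediction error vanishes
# when the expectation is exactly met.
print(sharpened(signal, matching_prior))
print(prediction_error(signal, matching_prior))
```

The study's key point is that these schemes make opposite predictions for how much stimulus information survives in the representation when expectations are informative, which is what the multivariate fMRI analysis tested.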


Subjects
Magnetic Resonance Imaging/methods; Speech Perception; Behavior; Humans; Models, Theoretical; Multivariate Analysis; Temporal Lobe/physiology
13.
Hum Brain Mapp ; 36(1): 324-39, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25220190

ABSTRACT

Recognizing the identity of other individuals across different sensory modalities is critical for successful social interaction. In the human brain, face- and voice-sensitive areas are separate, but structurally connected. What kind of information is exchanged between these specialized areas during cross-modal recognition of other individuals is currently unclear. For faces, specific areas are sensitive to identity and to physical properties. It is an open question whether voices activate representations of face identity or physical facial properties in these areas. To address this question, we used functional magnetic resonance imaging in humans and a voice-face priming design. In this design, familiar voices were followed by morphed faces that matched or mismatched with respect to identity or physical properties. The results showed that responses in face-sensitive regions were modulated when face identity or physical properties did not match to the preceding voice. The strength of this mismatch signal depended on the level of certainty the participant had about the voice identity. This suggests that both identity and physical property information was provided by the voice to face areas. The activity and connectivity profiles differed between face-sensitive areas: (i) the occipital face area seemed to receive information about both physical properties and identity, (ii) the fusiform face area seemed to receive identity, and (iii) the anterior temporal lobe seemed to receive predominantly identity information from the voice. We interpret these results within a predictive coding scheme in which both identity and physical property information is used across sensory modalities to recognize individuals.


Subjects
Auditory Perception/physiology; Brain/physiology; Pattern Recognition, Visual/physiology; Recognition, Psychology; Sensation/physiology; Acoustic Stimulation; Adult; Brain/blood supply; Brain Mapping; Female; Functional Laterality; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Oxygen/blood; Photic Stimulation; Psychophysics; Reaction Time/physiology; Young Adult
14.
Neurosci Biobehav Rev ; 47: 717-34, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25451765

ABSTRACT

Recognizing other persons is a key skill in social interaction, whether it is with our family at home or with our colleagues at work. Due to brain lesions such as stroke, or neurodegenerative disease, or due to psychiatric conditions, abilities in recognizing even personally familiar persons can be impaired. The underlying causes in the human brain have not yet been well understood. Here, we provide a comprehensive overview of studies reporting locations of brain damage in patients impaired in person-identity recognition, and relate the results to a quantitative meta-analysis based on functional imaging studies investigating person-identity recognition in healthy individuals. We identify modality-specific brain areas involved in recognition from different person characteristics, and potential multimodal hubs for person processing in the anterior temporal, frontal, and parietal lobes and posterior cingulate. Our combined review is built on cognitive and neuroscientific models of face- and voice-identity recognition and revises them within the multimodal context of person-identity recognition. These results provide a novel framework for future research in person-identity recognition both in the clinical as well as basic neurosciences.


Subjects
Brain/physiology; Interpersonal Relations; Recognition, Psychology/physiology; Face; Humans
15.
J Neurosci ; 33(9): 3939-52, 2013 Feb 27.
Article in English | MEDLINE | ID: mdl-23447604

ABSTRACT

Perceptual decision making is the process by which information from sensory systems is combined and used to influence our behavior. In addition to the sensory input, this process can be affected by other factors, such as reward and punishment for correct and incorrect responses. To investigate the temporal dynamics of how monetary punishment influences perceptual decision making in humans, we collected electroencephalography (EEG) data during a perceptual categorization task whereby the punishment level for incorrect responses was parametrically manipulated across blocks of trials. Behaviorally, we observed improved accuracy for high relative to low punishment levels. Using multivariate linear discriminant analysis of the EEG, we identified multiple punishment-induced discriminating components with spatially distinct scalp topographies. Compared with components related to sensory evidence, components discriminating punishment levels appeared later in the trial, suggesting that punishment affects primarily late postsensory, decision-related processing. Crucially, the amplitude of these punishment components across participants was predictive of the size of the behavioral improvements induced by punishment. Finally, trial-by-trial changes in prestimulus oscillatory activity in the alpha and gamma bands were good predictors of the amplitude of these components. We discuss these findings in the context of increased motivation/attention, resulting from increases in punishment, which in turn yields improved decision-related processing.
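The multivariate linear discriminant analysis applied to the EEG can be sketched as a Fisher discriminant: weight the channels by the inverse within-class covariance times the difference of the class means. A toy numpy version (illustrative; the study's single-trial, time-resolved pipeline is more involved, and the data here are hypothetical):

```python
import numpy as np

def fisher_lda_weights(class_a, class_b):
    """Fisher linear discriminant weights for two classes of trials
    (each array: trials x channels), with a small ridge for stability."""
    mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)
    sw = np.cov(class_a, rowvar=False) + np.cov(class_b, rowvar=False)
    sw += 1e-6 * np.eye(sw.shape[0])           # ridge regularization
    w = np.linalg.solve(sw, mean_b - mean_a)
    return w / np.linalg.norm(w)

# Toy EEG-like data: 8 channels, class means differ on channel 0
rng = np.random.default_rng(2)
low = rng.normal(size=(200, 8))                # e.g., low-punishment trials
high = rng.normal(size=(200, 8))               # e.g., high-punishment trials
high[:, 0] += 1.0
w = fisher_lda_weights(low, high)
print((high @ w).mean() - (low @ w).mean())    # separation along w
```

Projecting trials onto `w` yields a one-dimensional discriminating component whose scalp topography is given by the weights, which is the kind of component the study relates to behavior.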


Subjects
Brain Waves/physiology; Brain/physiology; Decision Making/physiology; Punishment/psychology; Size Perception/physiology; Adult; Discriminant Analysis; Discrimination, Psychological; Electroencephalography; Female; Humans; Male; Photic Stimulation; Reaction Time/physiology; Spectrum Analysis; Young Adult
16.
Neuroimage ; 65: 109-18, 2013 Jan 15.
Article in English | MEDLINE | ID: mdl-23023154

ABSTRACT

Speech recognition from visual-only faces is difficult, but can be improved by prior information about what is said. Here, we investigated how the human brain uses prior information from auditory speech to improve visual-speech recognition. In a functional magnetic resonance imaging study, participants performed a visual-speech recognition task, indicating whether the word spoken in visual-only videos matched the preceding auditory-only speech, and a control task (face-identity recognition) containing exactly the same stimuli. We localized a visual-speech processing network by contrasting activity during visual-speech recognition with the control task. Within this network, the left posterior superior temporal sulcus (STS) showed increased activity and interacted with auditory-speech areas if prior information from auditory speech did not match the visual speech. This mismatch-related activity and the functional connectivity to auditory-speech areas were specific for speech, i.e., they were not present in the control task. The mismatch-related activity correlated positively with performance, indicating that posterior STS was behaviorally relevant for visual-speech recognition. In line with predictive coding frameworks, these findings suggest that prediction error signals are produced if visually presented speech does not match the prediction from preceding auditory speech, and that this mechanism plays a role in optimizing visual-speech recognition by prior information.


Subjects
Brain Mapping; Brain/physiology; Recognition, Psychology/physiology; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Magnetic Resonance Imaging; Male; Photic Stimulation; Young Adult
17.
J Neurosci ; 31(36): 12906-15, 2011 Sep 07.
Article in English | MEDLINE | ID: mdl-21900569

ABSTRACT

Currently, there are two opposing models for how voice and face information is integrated in the human brain to recognize person identity. The conventional model assumes that voice and face information is only combined at a supramodal stage (Bruce and Young, 1986; Burton et al., 1990; Ellis et al., 1997). An alternative model posits that areas encoding voice and face information also interact directly and that this direct interaction is behaviorally relevant for optimizing person recognition (von Kriegstein et al., 2005; von Kriegstein and Giraud, 2006). To disambiguate between the two different models, we tested for evidence of direct structural connections between voice- and face-processing cortical areas by combining functional and diffusion magnetic resonance imaging. We localized, at the individual subject level, three voice-sensitive areas in anterior, middle, and posterior superior temporal sulcus (STS) and face-sensitive areas in the fusiform gyrus [fusiform face area (FFA)]. Using probabilistic tractography, we show evidence that the FFA is structurally connected with voice-sensitive areas in STS. In particular, our results suggest that the FFA is more strongly connected to middle and anterior than to posterior areas of the voice-sensitive STS. This specific structural connectivity pattern indicates that direct links between face- and voice-recognition areas could be used to optimize human person recognition.


Subjects
Cerebral Cortex/physiology; Face; Recognition, Psychology/physiology; Voice; Acoustic Stimulation; Adult; Artificial Intelligence; Brain Mapping; Data Interpretation, Statistical; Diffusion Tensor Imaging; Female; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Male; Neural Pathways/physiology; Photic Stimulation; Prosopagnosia/pathology; Temporal Lobe/physiology; Young Adult